Search Results: "Sylvain Le Gall"

17 July 2012

Sylvain Le Gall: Coding by example, migrating ocsoap to OASIS and ocamlbuild

In my effort to automate most of the stuff needed to do a release, I am working on a SOAP-to-OCaml converter. Richard W.M. Jones has kindly allowed me to take over his project "ocsoap". The project is now located here and I am working on making it compatible with FusionForge SOAP. My first step was to make it compatible with OASIS and ocamlbuild. The project was Makefile-based and needed some extra care to become ocamlbuild-based. The oasis-fication itself was a piece of cake: just describe the dependencies of the project and make some extra choices, like compiling the camlp5 extension to a .cma rather than a .cmo. You can see the resulting _oasis and myocamlbuild.ml, compared to the initial Makefile. The _oasis is simple:
OASISFormat:  0.3
Name:         ocsoap
Version:      0.7.1
Synopsis:     SOAP converter to OCaml code
Authors:      Richard W.M. Jones, Sylvain Le Gall
License:      LGPL-2.1 with OCaml linking exception
Plugins:      DevFiles (0.3), META (0.3), 
              StdFiles (0.3)
BuildTools:   ocamlbuild,
              cduce
BuildDepends: dynlink,
              pxp-lex-utf8,
              pxp-engine,
              netclient,
              cduce,
              extlib,
              calendar,
              pcre
Library ocsoap
  Path:       src
  Modules:    OCSoap
Library pa_ocsoapclientstubs
  Path:          src
  Modules:       Pa_ocsoapclientstubs
  BuildTools:    camlp5o
  BuildDepends:  camlp5
  FindlibParent: ocsoap
  FindlibName:   syntax
  CompiledObject: byte
  
Executable wsdl_validate
  Path:       src
  MainIs:     wsdl_validate.ml
  
Executable wsdltointf
  Path:       src
  MainIs:     wsdltointf.ml
Executable adwords_test1
  Path: examples/adwords
  BuildDepends: ocsoap
  MainIs: test1.ml
  Build$: flag(tests)
  Install: false
  BuildTools: camlp5o
  BuildDepends: ocsoap.syntax
Executable adwords_test2 
  Path: examples/adwords
  BuildDepends: ocsoap
  MainIs: test2.ml
  Build$: flag(tests)
  Install: false
  BuildTools: camlp5o
  BuildDepends: ocsoap.syntax
Executable adwords_examples 
  Path: examples/adwords
  BuildDepends: ocsoap
  MainIs: example.ml
  Build$: flag(tests)
  Install: false
  BuildTools: camlp5o
  BuildDepends: ocsoap.syntax
Document "api-ocsoap"
  Title: API reference of OCSoap
  Type: ocamlbuild (0.3)
  BuildTools+: ocamldoc
  XOCamlbuildLibraries: ocsoap
  XOCamlbuildPath:      src/
Most of it was generated using oasis quickstart, which helps you write the _oasis file. This part was not tricky: it was mostly a translation of what the previous Makefile expressed in its own language. Now comes the tricky part: translating Makefile rules into ocamlbuild rules. This is harder because the general principles behind Makefile and ocamlbuild are not exactly the same. Let's start with a simple rule: compiling a CDuce .cd file into a .cdo. Here is the Makefile rule:
%.cdo: %.cd
        $(CDUCE) --compile $<
In the oasis-fication, we have added an extra complexity, because we moved the sources to src/, which requires extra flags. Here is the ocamlbuild rule:
rule "cduce: %.cd -> %.cdo"
  ~prod:"%.cdo"
  ~dep:"%.cd"
  begin
    fun env build ->
      Cmd(S[cduce;
            T(tags_of_pathname (env "%.cd")++"cduce"++"compile");
            A"--compile";
            A"-I"; P(Filename.dirname (env "%.cd"));
            P (env "%.cd")])
  end
;;
It is a little bit more complex, so let's explain it. The function rule creates a rule with a name and a function that executes it. The labels ~prod and ~dep correspond to the left and right parts of %.cdo: %.cd in the Makefile. When we call env "%.cd", the % is replaced by the part matched against ~prod and ~dep. The function then returns an action; in this case it is a command, Cmd. This rule uses a sequence S, which is the command line itself. The sequence is made of smaller pieces that can be a placeholder for tag content (T, plus the tags that will trigger it), an atom (A, typically a command-line option), or a filename (P). For a file src/foo.cd, the generated command will be
cduce $(TAG) --compile -I src src/foo.cd
where $(TAG) will be replaced by the content of flag ["file:src/foo.cd"; "cduce"; "compile"], but also by the content of flag ["cduce"; "compile"]: the tags of a flag just need to be a subset of the tags attached to the command. For example, if you want to add --verbose to the command line, just add flag ["cduce"; "compile"] (S[A"--verbose"]);; somewhere in myocamlbuild.ml. Next rule: we need to compile %.cdo to %.cmo. It is trickier because in this case we want the %.cmi to have been built beforehand, if possible. It is not required to build it first if there is no %.mli file. Here is the code to do that:
let cduce_mkstubs includes =
  Quote (S([cduce;
            S(List.map
                (fun fn -> S[A"-I"; P fn])
                includes);
            A"--mlstub"]));;
(* cduce < 0.3.9:  cdo2ml -static *) 
let cduce_compile_args env =
  [
    A"-c"; A"-package"; A"cduce";
    A"-pp"; cduce_mkstubs [Filename.dirname (env "%.cmi")];
    A"-I"; P(Filename.dirname (env "%.cmi"));
    A"-impl"; P(env "%.cdo")
  ]
;;
let cduce_prepare_compile env build =
  List.iter
    (function
         Outcome.Bad _ ->
           (* The build failed, but that just means the .cmi will be
            * generated during compilation.
            *)
           ()
       | Outcome.Good _ ->
           ())
    (build [[env "%.cmi"]])
;;
rule "cduce: %.cdo -> %.cmo"
  ~prod:"%.cmo"
  ~dep:"%.cdo"
  begin
    fun env build ->
      cduce_prepare_compile env build;
      Cmd(S(
        [ocamlfind; ocamlc;
         T(tags_of_pathname (env "%.cdo")
           ++"cduce"++"ocamlc"++"compile"++"byte")]
        @ (cduce_compile_args env)))
  end
;;
The difference with the former rule is that we call cduce_prepare_compile, which in turn calls (build [[env "%.cmi"]]). The call to build asks ocamlbuild to compile src/foo.cmi, but we don't care about the result, i.e. in case of Outcome.Bad exn we don't fail. This way we don't stop the build process, which will continue and produce the .cmo and .cmi just from the .cdo. The .cdo itself is compiled as a .ml file with ocamlfind ocamlc, except that we apply cduce --mlstub as a preprocessor: A"-pp"; Quote(S[cduce; ..; A"--mlstub"]). The last rule that I will comment on is the one that transforms a .intf into a .ml. This rule is special because it is totally different from the corresponding pieces of the original Makefile. Here are the rules that were needed to handle .intf files:
examples/adwords/%Service.cmx: examples/adwords/%Service.intf                                                       
   ocamlfind ocamlopt $(OCAMLOPTFLAGS) -c \                                                                          
     -pp "camlp5o ./pa_ocsoapclientstubs.cmo -impl" -c -impl $<
.depend: $(wildcard *.mli) $(wildcard *.ml) \
  $(wildcard examples/adwords/*.mli) $(wildcard examples/adwords/*.ml)
  $(OCAMLDEP) $^ > .depend
  for f in examples/adwords/*.intf; do \
    $(OCAMLDEP) \
    -pp "camlp5o ./pa_ocsoapclientstubs.cmo pr_o.cmo -impl" $$f \
    >> .depend; \
  done
Here is the ocamlbuild rule:
rule "ocsoap: %.intf -> %.ml"
  ~prod:"%.ml"
  ~deps:(if !ocsoap_dev then
           ["%.intf"; !pa_ocsoapclientstubs]
         else
           ["%.intf"])
  begin
    fun env build ->
      Cmd(S[Px !camlp5o; P !pa_ocsoapclientstubs; P "pr_o.cmo"; A"-impl";
            P(env "%.intf"); A"-o"; P(env "%.ml")])
  end
;;
Here we decided to translate the .intf directly into a .ml file. The good thing about ocamlbuild is that it has a powerful dynamic dependency scheme. So here you generate the .ml file, which in turn will get a .ml.depends and then be compiled the standard way. In the Makefile, you need to compute the .depend file in a separate step that does everything before the compilation even starts (in fact, before the inclusion of .depend). We also use the trick of running an OCaml printer (pr_o.cmo) with camlp5o, which directly outputs a standard .ml file. Don't hesitate to post a comment if you have questions about OASIS and ocamlbuild. Enjoy.
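As a side note, here is my own minimal sketch (not the project's actual myocamlbuild.ml) of how such rules and flags are registered: they go into a dispatch handler, typically under the After_rules hook; in an OASIS-generated myocamlbuild.ml, equivalent code is combined with the dispatch call that OASIS emits:
open Ocamlbuild_plugin;;

let () =
  dispatch
    begin function
      | After_rules ->
          (* extra option picked up by any command tagged "cduce" and "compile" *)
          flag ["cduce"; "compile"] (S[A"--verbose"]);
          (* the custom rules shown above (cduce, .intf) would be declared here too *)
          ()
      | _ ->
          ()
    end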

28 June 2012

Sylvain Le Gall: OASIS 0.3.0 release

OASIS is a tool to integrate a configure, build and install system in your OCaml project. It helps to create standard entry points in your build system and allows external tools to analyse your project easily. It is the building brick of OASIS-DB, a CPAN for OCaml. This release took almost 18 months to complete. This is too long and I will talk in another blog post about the way I am trying to improve this right now (esp. using continuous integration). This new release fixes a small bug (1 line) that prevented setup.ml from running with OCaml 4.00. If you have projects that were generated with a former release, consider upgrading to OASIS 0.3.0. Several big new features come with this release. It now supports the Pack: field for libraries, which allows packing your library using -for-pack and so on. We also compile .cmxs (native dynlink objects) by default and publish them in the META file. This feature was implemented in order to get more libraries to provide .cmxs and to help projects like Ocsigen take advantage of it. If you want to get rid of this at configure time, you can use ./configure --override native_dynlink false. We introduce two new default flags for configure: --enable-tests and --disable-docs. These are implicit flags that define whether we will run Test sections or compile Document sections. They are especially useful to reduce the number of dependencies, because dependencies of Test sections are excluded by default. We recommend setting Build$: flag(tests) on any Library or Executable section that is only useful for tests. This allows you to really cut down the number of dependencies. The last change I want to introduce is about the old setup-dev subcommand, which is now deprecated. It has been replaced by two different update schemes. I am pretty excited by this feature, which in fact comes from OASIS users (esp. the Lwt project). The former scheme was to have a big setup.ml that always called the command oasis to update itself. This was complex and not very useful. We now have two modes: dynamic and weak. dynamic allows you to have a small setup.ml and to keep your VCS history clean, but you need to install OASIS. weak needs a big setup.ml but only calls the command oasis if someone changes something in _oasis. This mode is targeted at projects that want to be checked out from VCS and built without installing OASIS. The diffs generated by weak mode don't pollute the VCS history too much because most of the time they make sense. For example, you bump your package version number in _oasis and it produces a change of 6 lines where the version number changes in every META, setup.ml and so on. This version has been tested on several platforms. You can download it here or use your favorite package manager: Debian (UPDATE: pending upload)
$ apt-get install -t experimental oasis
GODI
$ godi_console perform -build apps-oasis
odb.ml
$ ocaml odb.ml oasis
Here is the complete changelog: EXTREMELY IMPORTANT changes (read this): PACKAGES uploaded to oasis-db will be automatically "derived" before the OCaml 4.00 release (i.e. oUnit v1.1.1 will be regenerated with this new version as oUnit v1.1.1~oasis1). PACKAGES not uploaded to oasis-db need to be regenerated. In order not to break 3rd-party tools that consider a tarball constant, I recommend creating a new version. Thanks to the INRIA OCaml team for synchronizing with us on this point. Major changes: You can now have the following example:
     ...
     Executable test_exec
       Install: false
       Build$: flag(tests)
       MainIs: testExec.ml
       BuildDepends: oUnit
     
     Test main
       Command: $test_exec
       TestTools: test_exec
     ...
The Run$: flag(tests) is implicit for the section Test main. The default value is false for tests. If all the executables for tests are flagged correctly (Build$: flag(tests)), you get rid of the dependency on oUnit. It works the same for documentation, but the default is true. (Closes: #866) In order to allow interdependent flags, we transformed lazy values back into unit -> string functions. This allows changing a flag value on the command line and updating all the dependent values. (Closes: #827, #938) It defines different ways to manage the auto-update of setup.ml: the choice between weak and dynamic depends on your needs with regard to VCS and to the presence of oasis. The weak mode allows checking out the project from VCS and working on it without installing 'oasis', as long as you don't change the file _oasis. But it clutters your VCS history with changes to the build system each time you change something in _oasis. The 'dynamic' mode gives you no VCS history pollution but makes it mandatory to have the oasis libraries installed. We avoid copying executables to their real name. This helps to call ocamlbuild a single time for the whole build rather than calling it n times (n = number of Executable sections) and copying the resulting executables. This speeds up the build process because ocamlbuild doesn't have to compute/scan dependencies each time. The drawback is that you have to use $foo when you want to call Executable foo, because $foo will be '_build/.../main.byte'. For example:
CCOpt: -DEXTERNAL_EXP10 -L/sw/lib "-framework vecLib"
This will be parsed correctly and output according to the target OS. In order to ease building oasis, we have minimized the number of dependencies. You only need to install ocamlmod, ocamlify and ocaml-data-notation for a standard build without tests. The dependencies on pcre, extlib and ocamlgraph have been dropped. The remaining dependencies are hidden behind the tests flag. OASIS now produces .cmxs files by default and adds them to META. Now a META looks like:
     ...
     archive(byte) = "oasis.cma"
     archive(byte, plugin) = "oasis.cma"
     archive(native) = "oasis.cmxa"
     archive(native, plugin) = "oasis.cmxs"
     ...
This will ultimately help to generate .cmxs automatically for all oasis-enabled projects. We hope that this new feature will improve the use of dynamic linking in OCaml (esp. for projects like Ocsigen). Other changes are detailed in the full changelog. Thanks to Anil Madhavapeddy, Pierre Chambart, Christophe Troestler, Jeremie Dimino, Ronan Le Hy, Yaron Minsky and Till Varoquaux for their help with this release. Also thanks to all the testers of the numerous release candidates. This was a long effort, and each time a tester downloaded oasis it helped me to know that I was working for someone.
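As an aside (this is my own sketch, not part of the release notes): the point of publishing archive(native, plugin) entries is that the library can then be loaded at run time with Dynlink. A minimal, hypothetical example -- the plugin file name below is made up:
let () =
  try
    (* adapt_filename picks plugin.cma in bytecode and plugin.cmxs in native code *)
    Dynlink.loadfile (Dynlink.adapt_filename "plugin.cma")
  with Dynlink.Error e ->
    prerr_endline (Dynlink.error_message e)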

Sylvain Le Gall: Installing a new SSD on my laptop

3 years ago I bought an OCZ Core v2 SSD... I was very disappointed by the write freezes, the price and the capacity. Recently a friend of mine told me that I should try the new ones, which have improved a lot in the meantime. So I decided to give SSDs another try on my laptop. I bought a Crucial M4 128GB SSD, because they seem to be of better quality than OCZ (and probably because of my first failed attempt). The price is correct (~130 €) and after some tests it seems quite good at write and read speed. So from this point of view, I am more convinced than after my first attempt. Now, the real problem of an SSD is how to migrate your old data to the new drive. I had always upgraded my HD to a larger one, so a simple dd if=/dev/sda of=/dev/sdb bs=32M was enough... In the case of an SSD you have to migrate to a smaller capacity. This involves juggling a little bit with your partitions to get something that fits. In order to improve the lifetime of my SSD a little, I tried to follow various pieces of advice you can find on the Internet. One piece of advice that seems to make sense is to align your partitions on sectors that are a multiple of 2048 (rather than the default, which is to start at sector 63). I followed this forum to do that. I ended up with all partitions aligned to 2048. The GOOD news: doing the data migration for Linux is a breeze. As a matter of fact, Linux supports pretty well being moved around, ending up on a different partition and so on... It just took me the time to transfer the data, update grub and the disk UUIDs, and reboot. The BAD news: doing the data migration for Windows is hell. I always keep a Windows partition to test various OCaml software. But Windows doesn't like to be moved around. It especially doesn't like the first sector of its partition being moved (i.e. from sector 63 to sector 2048). I spent a week trying to 'repair windows' but it was a pure waste of time. Thinkpads come with a special way to store passwords which is not compatible with the rescue mode of Windows, so you cannot access the repair mode -- because you need the Administrator password -- which is not readable... I decided to go back to the standard first sector at 63... Now Windows complained about 'hal.dll' missing. You know why? Because the partition number had changed (from /dev/sda1 to /dev/sda3). Boot into Linux, run fdisk, enter expert mode, fix the partition order -- after having remounted the root partition read-only and unmounted everything else. Grub complained a little bit and fell into rescue mode. I fixed that by following these instructions and running 'grub-install --recheck /dev/sda'. So after 2 weeks of fighting, I ended up with a working Windows (honestly, this is a pain; don't try to play with Windows partitions, this OS is a lot more sensitive than Linux).

26 April 2012

Sylvain Le Gall: On the importance of memory buffer...

I have been working for a long time on trying to solve random crashes on the OCaml Forge. I first suspected failing hardware to be the cause, but found no evidence of that. In despair of a solution, I installed cacti to investigate the issue. The good news is that it allowed me to catch a pretty nice graph of a crash (graph: Load average - Crash). Just before the crash the load average was 200... My conclusion was that too many processes were running at the same time. So I started to hunt for processes that load the server for nothing. One of them was darcsweb. This process doesn't really create load by itself, but it calls darcs for various operations and most of them are quite expensive. The most expensive one is darcs diff. I first turned on darcsweb's caching, which already reduced the number of invocations of darcs. But it didn't make a real difference (except that the website was faster to load). I continued to investigate. In order to reduce the overall load of the server, I installed various robots.txt files to prevent crawlers from calling diff on every VCS on the forge. Crawler traffic is 10 times the normal traffic and indexing diffs in Google is not very interesting. It didn't make a real difference either (graph: Load average - Peaks). I installed a script to analyze all the load peaks above 2. The peaks happen every hour and are more or less important (from 4 to 10). I discovered that these peaks were related to the hourly FusionForge cronjob that fixes repository permissions. I was pretty surprised because updating permissions should not generate this load. This morning, I had a simple idea. There are a lot of processes on the server and one of them (the bzr daemon, loggerhead) eats a big chunk of memory (or at least never frees it). I just had a quick look at the "buffers" memory... Only 4MB! I just restarted loggerhead. The improvement made on robots.txt makes it a lot more stable with regard to memory consumption, since we don't run bzr diff anymore. The "buffers" memory is now at 100MB and guess what! No more peaks... Here is the result (graph: Load average - Restart). Conclusion: sometimes the cause of a high load lies in a single number...

26 February 2011

Sylvain Le Gall: OCaml Debian News

... or don't shoot yourself in the foot. This is not a big secret: Debian Squeeze has been released. Right after this event, the OCaml Debian Task Force was back in action -- with Stéphane in the leading role. He has planned the transition to OCaml 3.12.0. We will proceed in two steps: a small transition of a reduced set of packages that can be transitioned before 3.12, and then the big transition. The reason for the small transition is to avoid having to dep-wait (wait for dependencies) on packages uploaded by humans. In a not-so-distant past, the OCaml Debian Task Force members were uploading packages by hand and waited for a full rebuild to go to the next step. This was long and cumbersome. We now use binNMUs: binary-only uploads -- with no source changes -- processed automatically by the release team and its infrastructure. This is far more effective and helps us reduce the duration of the transition... The small transition is happening now!!! Don't update/upgrade your critical Debian installations with OCaml packages: you'll get a lot of removals if you do so. N.B. these removals are part of the famous "Enforcing type-safe linking using package dependencies" paper. As a side note, I am happy to announce that a full round of new OCaml packages has landed in Debian unstable. People aware of my current work should notice that all the dependencies of OASIS are now in Debian unstable: ocaml-data-notation, ocamlify, ocaml-expect. This is a hint about the next OCaml Debian package I will upload. You can also have a look at the OASIS-enabled packages (all the OASIS dependencies, ocaml-sqlexpr and ocaml-extunix). These packages have been generated using oasis2debian, a tool to convert _oasis into debian/ packaging files. After these transitions, we will continue with standard upgrade work (e.g. camomile to 0.8.1). Sylvain Le Gall is an OCaml consultant working for OCamlCore SARL

23 December 2010

Raphaël Hertzog: People behind Debian: Mehdi Dogguy, release assistant

Mehdi Dogguy

Picture of Mehdi taken by Antoine Madet

Mehdi has been a Debian developer for a bit more than a year, and he's already part of the Debian Release Team. His story is quite typical in that he started there by trying to help while observing the team do its work. That's a recurrent pattern for people who get co-opted in free software teams. Read on for more info about the release team, and Mehdi's opinion on many topics. My questions are in bold, the rest is by Mehdi (except for the additional information that I inserted in italics). Who are you? I'm 27 years old. I grew up in Ariana in northern Tunisia, but have been living in Paris, France, since 2002. I'm a PhD student at the PPS laboratory where I study synchronous concurrent process calculi. I became interested in Debian when I saw one of my colleagues, Samuel Mimram (first sponsor and advocate), trying to resolve #440469, which is a bug reported against a program I wrote. We have never been able to resolve it, but my intent to contribute was born there. Since then, I started to maintain some packages and help where I can. What's your biggest achievement within Debian? I don't think I have had time to accomplish a lot yet :) I've been mostly active in the OCaml team, where we designed a tool to automatically compute the dependencies between OCaml packages, called dh-ocaml. This was joint work with Stéphane Glondu, Sylvain Le Gall and Stefano Zacchiroli. I really appreciated the time spent with them while developing dh-ocaml. Some of the bits included in dh-ocaml have been included upstream in their latest release. I've also tried to give a second life to the Buildd Status Pages because they were (kind of) abandoned. I intend to keep them alive and add new features to them. If you had a wand and could change one thing in Debian, what would that be? Make OCaml part of a default Debian installation :D But, since I'm not a magician yet, I'd stick to more realistic plans:
  1. A lot of desktop users fear Debian. I think that the Desktop installation offered by Debian today is very user-friendly and we should be able to attract more and more desktop users. Still, there is some work to be done in various places to make it even more attractive. The idea is to enhance the usability and integration of the various tools together. Each fix could be easy or trivial, but the final result would be an improved Desktop experience for our users. Our packaged software runs well. So anyone can participate, since the most difficult part is finding the broken scenarios. Fixes could be found together with maintainers, upstream or other interested people.

    I'll try to come up with a plan, a list of things that need polishing or fixing, and gather a group of people to work on it. I'd definitely be interested in participating in such a project and I hope that I'll find other people to help. If the plan is clear enough and has well-described objectives and criteria, it could be proposed to the Release Team to consider it as a Release Goal for Wheezy.

  2. NMUs are a great way to make things move forward. But sometimes an NMU can break things or have undesirable effects. For now, NMUers have to manually track the package's status for some time to be sure that everything is alright. It could be a good idea to be auto-subscribed to the bug notifications of NMUed packages for some period of time (let's say for a month) to be aware of any new issues and try to fix them. NMUing a package is not just applying a patch and hitting enter after dput. It's also about making sure that the changes are correct and that no regressions have been introduced, etc.

  3. Orphaned packages: it could be considered too strict and undesirable, but what about not keeping orphaned and buggy packages in Testing? What about removing them from the archive if they are buggy and still unmaintained for some period? Our ftp archive is growing. It could make sense to do some (stricter) housekeeping. I believe that this question can be raised during the next QA meeting. We should think about what we want to do with those packages before they rot in the archive.
[Raphael Hertzog: I would like to point out that pts-subscribe, provided by devscripts, makes it easy to temporarily subscribe to bug notifications after a Non-Maintainer Upload (NMU).] You have been a Debian developer since August 2009 and you're already an assistant within the Release Management team. How did that happen and what is this about? In the OCaml team, we have to start a transition each time we upload a new version of the OCaml compiler (actually, for each package). So some coordination with the Release Team is needed to make the transition happen. When we are ready to upload a new version of the compiler, we ask the Release Team for permission and wait for their ack. Sometimes their reply is fast (e.g. if there is no conflicting transition running), but it's not always the case. While waiting for an ack, I used to check what was happening on debian-release@l.d.o. It made me more and more interested in the activities of the Release Team. Then (before getting my Debian account), I had the chance to participate in DebConf9, where I met Luk and Phil. It was a good occasion to see more of the tools used by the Release Team. During April 2010, I had some spare time and was able to implement a little tool called Jamie to inspect the relations between transitions. It helps us quickly see which transitions can run in parallel, or which should wait. And one day (in May 2010, IIRC), Adam invited me to join the team. As members of the Release Team, we have multiple areas to work on:
  1. Taking care of transitions during the development cycle, which means making sure that some set of packages are correctly (re-)built or fixed against a specific (to each transition) set of packages, and finding a way to tell Britney that those packages can migrate and it would be great if she also shared the same opinion. [Raphael Hertzog: britney is the name of the software that controls the content of the Testing distribution.]
  2. Paying attention to what is happening in the archive (uploads, reported RC bugs, etc.). The idea is to try to detect unexpected transitions and blocked packages, and to make sure that RC bug fixes reach Testing in a reasonable period of time, etc.
  3. During a freeze, making sure that unblock requests and freeze exceptions are not forgotten, and trying to make the RC bug count decrease.
There are other tasks that I'll let you discover by joining the game. Deciding what goes (or not) into the next stable release is a big responsibility and can be incredibly difficult at times. You have to make judgement calls all the time. What are your own criteria? That's a very hard question to answer (at least for me). It really depends on the case. I try to follow the criteria that we publish in each release update. Sometimes an unblock request doesn't match those criteria and we have to decide what to accept from the set of proposed changes. Generally, new features and non-fix changes (read: new upstream versions) are not the kind of changes that we would accept during the freeze. Some of them could be accepted if they are not intrusive, easy and well defended. When I'm not sure, I try to ask other members of the Release Team to see if they share my opinion or if I missed something important during the review. The key point is to have a clear idea of the benefit of the proposed update, and compare it to the current situation. For example, accepting a new upstream release (even if it fixes some critical bugs) is taking the risk of breaking other features, and that's why we (usually) ask for a backported fix. It's also worth noticing that (most of the time) we don't decide what goes in, but (more specifically) what version of a given package goes in, and we try to give the contributors an idea of what kind of changes are acceptable during the freeze. There are some exceptions though. Most of them are to fix a critical package or feature. Do you have plans to improve the release process for Debian Wheezy? We do have plans to improve every bit in Debian. Wheezy will be the best release ever. We just don't know the details yet :) During our last meeting in Paris last October, the Release Team agreed to organize a meeting after Squeeze's release to discuss (among other questions) Wheezy's cycle. But the details of the meeting are not fixed yet (we still have plenty of time to organize it and other more important tasks to care about). We would like to be able to announce a clear roadmap for Wheezy and enhance our communication with the rest of the project. We certainly want to avoid what happened for Squeeze. Making things a bit more predictable for developers is one of our goals. Do you think the Constantly Usable Testing project will help? The original idea by Joey Hess is great because it allows d-i developers to work with a stable version of the archive. It allows them to focus on the new features they want to implement or the parts they want to fix (AIUI). It also allows having constantly available and working installation images. Then, there is the idea of having a constantly usable Testing for users. The idea seems nice. People tend to like the idea behind CUT because they miss software that disappears from Testing and because of the long delays for security fixes to reach Testing. If the Release Team has decided to remove a package from Testing, I think that there must be a reason for that. It either means that the software is broken, has unfixed security holes, or its removal was requested by its maintainer. I think that we should rather try to spend some time fixing those packages, instead of throwing a broken version into a new suite. It could be argued that one could add experimental's version in CUT (or sid's), but every user is free to cherry-pick packages from the relevant suite when needed while still following Testing as the default branch.
Besides, it's quite easy to see what was removed recently by checking the archives of debian-testing-changes or by querying UDD. IMO, it would be more useful to provide a better interface to that archive for our users. We could even imagine a program that alerts the user about installed software that got recently removed from Testing, to keep the user constantly aware of any issue that could affect his machine. About security or other important updates, one has to recall the existence of Testing-security and testing-proposed-updates, which are used specifically to let fixes reach Testing as soon as possible when it's not possible to go through Unstable. I'm sure that the security team would appreciate some help to deal with security updates for Testing. We also have ways to speed up the migration of packages from Unstable to Testing. I have to admit that I'm not convinced yet by the benefits brought by CUT for our users.
Thank you to Mehdi for the time spent answering my questions. I hope you enjoyed reading his answers as much as I did. Subscribe to my newsletter to get my monthly summary of the Debian/Ubuntu news and to not miss further interviews. You can also follow along on Identi.ca, Twitter and Facebook.


22 October 2010

Sylvain Le Gall: Compiling pcre-ocaml with Visual Studio 2008

One big change in the recent OASIS v0.2 release is the replacement of Str by Pcre. The big advantage of Pcre is that it can be used in a multi-threaded environment, whereas that is not recommended with Str. Since OASIS is used by the Ocsigen part of OASIS-DB, we need it to work safely with Lwt with multiple users at the same time. Note that we could probably use Str directly with Lwt, because it is not really multi-threaded, but we want to be safe on this point and Pcre is a very powerful library. However, the OCaml Pcre library depends on an external C library (pcre). This is not a problem on Linux et al., where it is shipped with the OS by default, but on Windows you need to build it yourself. We want to do it using Microsoft Visual Studio 2008, mainly because OCaml was compiled with it -- and it seems the most natural way to build C libraries on Windows. As usual, building open source C libraries using MS Visual Studio is not the most common way to proceed. However, the use of CMake enables a simple way to do it. Here is how we can compile pcre and pcre-ocaml for Windows (screenshots: CMake-GUI with pcre; Visual Studio 2008 with pcre). The best way to test your newly created OCaml pcre library is to try to compile an example from the examples directory of pcre-ocaml itself; a minimal smoke test is sketched below. Sylvain Le Gall is an OCaml consultant at OCamlCore SARL
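For such a test, here is a minimal smoke test of my own (not from the post); it assumes the pcre findlib package is installed and is compiled with something like ocamlfind ocamlopt -package pcre -linkpkg test_pcre.ml -o test_pcre:
let () =
  let subject = "pcre-ocaml built with Visual Studio 2008" in
  (* Pcre.pmatch tests the pattern, Pcre.extract returns the matched groups *)
  if Pcre.pmatch ~pat:"Visual Studio ([0-9]+)" subject then begin
    let groups = Pcre.extract ~pat:"Visual Studio ([0-9]+)" subject in
    Printf.printf "matched, version = %s\n" groups.(1)
  end else
    print_endline "no match"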

20 October 2010

Sylvain Le Gall: Unison on windows tips

The big advantage of Unison on Windows is that it quite easily allows synchronizing files between Windows and Linux. For those who need to work on Windows with the same set of files as on Linux, this is a big plus. Other tools do it as well, but the two-way sync of Unison is quite nice. When you need to compile a piece of software on Linux and Windows, you can modify both sides at the same time and (almost) never have problems. On Windows, .unison and unison.log are located in your %HOMEPATH%, which is the parent directory of the classic Documents folder. In the .unison directory, you will find the .prf files that describe your Unison profiles. As usual, default.prf in this directory is the default profile. Basic: The basic tips are to disable directory indexing; you can also disable live virus scanning -- if you think it is safe!!!! SSH: Using ssh under Windows is always a challenge. As a matter of fact, this tool doesn't match the Windows context and it is not as well integrated as in Linux/BSD. There is Putty, which can help you. It has good support for remote shells but it is not very easy to set up with Unison. Putty and OpenSSH don't have precisely the same set of options, and Unison relies on some that are not available in Putty. There is a script called ssh2plink.bat that can help you use Putty's plink with Unison. I used it for a while, but it didn't give the expected throughput. The best option is to use the ssh command provided by Cygwin. In this case you get both good throughput and Unison integration. I explain here how to configure your Cygwin ssh to use an SSH key. You can skip the following steps if you wish to use a password or if you have already set up your ssh to connect to the target computer. Launch Cygwin's setup.exe and select openssh for installation. To add an SSH key, launch the Cygwin shell:
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
[...]
Copy the file .ssh/id_rsa.pub to your target computer's .ssh/authorized_keys. Be aware that the file can end up with Windows-style line endings (in this case use dos2unix to convert it), and if you copy/paste from a DOS box, some line breaks may be added; remove them from authorized_keys so that the key stays on a single line. Once you have installed your SSH key on the target computer, try to connect directly from the Cygwin shell.
$ ssh XXX
Now you can add sshcmd = c:\cygwin\bin\ssh.exe to your default.prf. Using Cygwin's ssh allows you to get ~2MB/s (or more) where you only get ~100KB/s using ssh2plink.bat. If you have any other tips to improve Unison on Windows, I will be happy to test them and post them here.

9 October 2010

Sylvain Le Gall: ocaml-gettext 0.3.0 is released

After three years, a new version of ocaml-gettext is out. I know that the delay is quite long... I have been too busy to do anything about it. This new version is mainly a bug fix release. However, the major event of this release is the fact that it is now compatible with OCaml 3.10.0. This is big news, because string extraction relies heavily on camlp4, which has changed a lot. I have also uploaded a Debian package of ocaml-gettext. Plans for the next release: Homepage Download

Sylvain Le Gall: OCaml Meeting, FOSDEM and holiday

Next week will be quite busy. I will travel a lot. Of course, I am talking about the OCaml Meeting and FOSDEM. The OCaml Meeting registration is over and we reached a quite good number of attendees: 45. This is approximately the number of people who came last year. There are now some details to finish (dinner for the previous day, schedule for the talks...) but there is enough time left. The only thing I hope for the meeting is that it generates the same effect as last year: another impulse in the OCaml community. Many people are working continuously on OCaml, but last year's event generated a lot of post-event discussion and many people started different projects. Some of these projects are still alive, some are not as lively as they deserve to be. I enjoyed being at FOSDEM last year, but I only stayed one day and it was not enough. This time I will be able to stay for the two days and attend more Debian talks than last year. I will visit Brussels and Ghent just after. Belgium is a really nice country and it has been a long time since I took a break. I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

Sylvain Le Gall: LLVM, OCaml and Debian

I hope some people from the OCaml community will enjoy this changelog, extracted from llvm 2.6-7, which has just been uploaded:
  [ Arthur Loiret ]
   
  [...]
  [ Sylvain Le Gall ]
  * Build a libllvm-ocaml-dev package, which contains the OCaml binding:
    Closes: #568556.
    - debian/debhelper.in/libllvm-ocaml-dev. dirs,doc-base,install,META : Add.
    - debian/control.in/source: Build-Depends on ocaml-nox (>= 3.11.2),
      ocaml-best-compilers | ocaml-nox, dh-ocaml (>= 0.9.1).
    - debian/packages.d/llvm.mk:
      + (llvm_packages): Add libllvm-ocaml-dev.
      + (libllvm-ocaml-dev_extra_binary): Define, install META file.
    - debian/rules.d/binary.mk: Add dh_installdirs and dh_ocaml.
    - debian/rules.d/vars.mk:
      + include /usr/share/ocaml/ocamlvars.mk.
      + Configure with --with-ocaml-libdir=$(OCAML_STDLIB_DIR)/llvm.
  * debian/rules.d/build.mk: Fix symlinks pointing to the $DESTDIR.
In other words: LLVM is now built with its OCaml bindings and a META file for findlib. It will take some days before reaching every architecture, but hopefully it will be in Squeeze (the next Debian stable release). Thanks to Arthur Loiret for the quick upload.
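To give an idea of what the new binding enables, here is a hedged sketch of my own (not from the changelog); it assumes the findlib package exposed by the new META file is called llvm and that the Llvm module provides the usual context/module entry points, whose exact signatures may differ in the 2.6-era binding:
let () =
  let ctx = Llvm.global_context () in
  let m = Llvm.create_module ctx "hello" in
  (* prints the (empty) module's IR *)
  Llvm.dump_module m;
  Llvm.dispose_module m
Something like ocamlfind ocamlopt -package llvm -linkpkg hello_llvm.ml would then build it against the packaged binding.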

Sylvain Le Gall: OCaml cryptokit and Java PBEWithMD5AndDES

During one of my projects I needed to interact with the Java cryptographic extension. Some data had been encrypted using PBEWithMD5AndDES, and I needed to access it from OCaml. I took a look at the cryptographic libraries available for OCaml in the Debian project: cryptgps and cryptokit. I chose cryptokit, because its author is well known: Xavier Leroy. This article was my starting point. Of course, I kept in mind that the reference is there and that there is a good article covering it. Here is the result in OCaml:
 open Cryptokit  (* provides hash_string, Hash, Cipher, Padding, transform_string *)

 let decrypt passphrase salt ?(iterationCount=41) str =
   let key, iv =
     let rec hash_aux iter str =
       if iter > 0 then
         (* Rehash string *)
         hash_aux
           (iter - 1)
           (hash_string
              (Hash.md5 ())
              str)
       else
         (* Key = first 8 bytes of the MD5 hash *)
         String.sub str 0 8,
         (* IV = last 8 bytes of the MD5 hash *)
         String.sub str 8 8
     in
       (* Hash n times combination of passphrase and salt,
           return key and iv 
         *)
       hash_aux
         iterationCount
         (passphrase ^ salt)
   in
     transform_string
        (Cipher.des
           ~pad:Padding.length
           ~iv:iv
           key
           Cipher.Decrypt)
       str
The only missing information was the padding algorithm to use (Padding.length). For this piece of information, I needed to browse the RSA documentation and test a little bit. Rewriting PBEWithMD5AndDES is quite straightforward with cryptokit and OCaml. It takes about 25 lines in both C# and OCaml (only counting LoC: no comments, no empty constructors or declarations in C#). I was thinking that this task would require 2 or 3 days, but it was done in 4 hours... Many thanks to cryptokit ;-)
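To exercise the function without the Java side, here is a round-trip sketch of my own (not from the original post): an encrypt counterpart built exactly the same way, used to check that decrypt recovers the plaintext. The passphrase and the 8-byte salt are made-up placeholders:
 let encrypt passphrase salt ?(iterationCount=41) str =
   let key, iv =
     let rec hash_aux iter str =
       if iter > 0 then
         hash_aux (iter - 1) (hash_string (Hash.md5 ()) str)
       else
         (* key = first 8 bytes, IV = last 8 bytes of the MD5 hash *)
         String.sub str 0 8, String.sub str 8 8
     in
       hash_aux iterationCount (passphrase ^ salt)
   in
     transform_string
       (Cipher.des ~pad:Padding.length ~iv:iv key Cipher.Encrypt)
       str

 let () =
   let secret = "hello from OCaml" in
   let cipher = encrypt "my passphrase" "saltsalt" secret in
   assert (decrypt "my passphrase" "saltsalt" cipher = secret);
   print_endline "round-trip OK"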

10 September 2010

Sylvain Le Gall: Dirty fix for omlet vim extension

omlet (or here) is a vim extension for writing OCaml code. In my opinion, it has better indentation than the standard OCaml vim support. Unfortunately, this has a cost: the vim indentation code is more complex. And it has a few bugs :-( The main bug is that it doesn't like unbalanced comment opening "(*" and closing "*)" tags. From time to time, it enters an infinite (or very long) loop when there is such a tag left in your file. It can be very far away from the point you are editing. This isn't too problematic, because unbalanced tags are a syntax error. But the problem is that it also matches these tags inside strings. So whenever you start using a regular expression like "(.*)", the whole indentation fails. But there is a very ugly solution to this problem! Problematic code:
 let parse_rgxp =
   Pcre.regexp ~flags:[ CASELESS] 
     "^(?<license>[A-Z0-9\\-]*[A-Z0-9]+)\
      (?<version>-[0-9\\.]+)?(?<later>\\+)?\
      ( *with *(?<exception>.*) *exception)?$"
The solution is to add ignore "(*"; just before it:
 let parse_rgxp =
   ignore "(*";
   Pcre.regexp ~flags:[ CASELESS] 
     "^(?<license>[A-Z0-9\\-]*[A-Z0-9]+)\
      (?<version>-[0-9\\.]+)?(?<later>\\+)?\
      ( *with *(?<exception>.*) *exception)?$"
Very, very ugly, coder: you balance comment tags in dead code -- very, very bad ;-) PS: another solution, when the plugin enters the infinite loop, is to hit Ctrl-C. This will stop it and let you define your own indentation.

1 September 2010

Sylvain Le Gall: OCaml 3.12 with Debian Sid right now!

Some careful readers of Planet OCamlCore may wonder why the OCaml packages in Debian have not yet been upgraded to 3.12.0. For the Planet Debian readers, this is the latest version of the Objective Caml programming language. The answer is simple: Debian Squeeze froze on 6th August. This means that Debian folks focus on fixing release-critical bugs and avoid doing big transitions in unstable (Sid). In particular, the Debian OCaml maintainers have decided to keep OCaml 3.11.2 for Squeeze, because the delay was really too short: OCaml 3.12 was out on 2nd August. Great work has already been done by S. Glondu and the rest of the Debian OCaml maintainers to spot possible problems. The result was a series of bugs submitted to the Debian BTS. This effort started quite early and has been updated with the various OCaml release candidates. S. Glondu has also built an unofficial Debian repository of OCaml 3.12.0 packages here. Let's use it to experiment with OCaml 3.12.0. schroot setup: Following my last post about schroot and CentOS, we will use a schroot to isolate our installation of the unofficial OCaml 3.12.0 packages. approx: approx is a Debian caching proxy server for Debian archive files. It is very effective and simple to set up. It is already on my server (Debian Lenny, approx v3.3.0). I just have to add a single line to create a proxy for the OCaml 3.12 packages:
 $ echo "ocaml-312   http://ocaml.debian.net/debian/ocaml-3.12.0" >> /etc/approx/approx.conf
 $ invoke-rc.d approx restart
approx is written in OCaml, if you want to know how I came to it. debootstrap and schroot: We create a chroot environment with Debian Sid:
# PROXY = host where approx is installed, debian/ points to official Debian repository of 
# your choice. 
$ debootstrap sid sid-amd64-ocaml312 http://PROXY:9999/debian
We create a section for sid-amd64-ocaml312 in /etc/schroot/schroot.conf (Debian Lenny):
[sid-amd64-ocaml312]
description=Debian sid/amd64 with OCaml 3.12.0
type=directory
location=/srv/chroot/sid-amd64-ocaml312
priority=3
users=XXX
root-groups=root
run-setup-scripts=true
run-exec-scripts=true
Replace XXX with your login. Then we install additional software:
 $ schroot -c sid-amd64-ocaml312 apt-get update
 $ schroot -c sid-amd64-ocaml312 apt-get install vim-nox sudo
OCaml 3.12 packages: Now we can start the setup to access the OCaml 3.12.0 packages. The repository is signed by S. Glondu's GPG key (see here). We need to get it and inject it into apt:
$ gpg --recv-key 49881AD3 
gpg: requête de la clé 49881AD3 du serveur hkp keys.gnupg.net
gpg: clé 49881AD3 : « Stéphane Glondu <steph@glondu.net> » n'a pas changé
gpg:        Quantité totale traitée: 1
gpg:                      inchangée: 1
$ gpg -a --export 49881AD3 > glondu.gpg
$ schroot -c sid-amd64-ocaml312 apt-key add glondu.gpg
The following part is done in the schroot:
$ schroot -c sid-amd64-ocaml312
# PROXY = host where approx is installed
(sid-amd64-ocaml312)$ echo "deb http://PROXY:9999/ocaml-312 sid main" >> /etc/apt/sources.list
(sid-amd64-ocaml312)$ cat <<EOF >> /etc/apt/preferences
Package: *
Pin: release l=ocaml
Pin-Priority: 1001
EOF
(sid-amd64-ocaml312)$ apt-get update 
...
(sid-amd64-ocaml312)$ apt-cache policy ocaml
  Installé : (aucun)
  Candidat : 3.12.0-1~38
 Table de version :
     3.12.0-1~38 0
       1001 http://atto/ocaml-312/ sid/main amd64 Packages
     3.11.2-1 0
        500 http://atto/debian/ sid/main amd64 Packages
(sid-amd64-ocaml312)$ apt-get install ocaml-nox libtype-conv-camlp4-dev libounit-ocaml-dev...
That's it. The apt-cache policy output shows that OCaml 3.12 from the ocaml-312 repository has a higher priority for installation. Good luck playing with OCaml 3.12.0.
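As a quick sanity check inside the chroot, here is a small program of my own (not from the post) that only compiles with the new packages, because it uses two OCaml 3.12 features, first-class modules and local opens; the file name test312.ml is arbitrary:
module type SHOW = sig val show : int -> string end

(* a first-class module value (new in 3.12) *)
let m = (module struct let show = string_of_int end : SHOW)

let () =
  let module S = (val m : SHOW) in
  (* local open (new in 3.12) *)
  let open Printf in
  printf "OCaml %s says %s\n" Sys.ocaml_version (S.show 312)
Compile it with ocamlopt test312.ml -o test312 (or simply run ocaml test312.ml) inside the sid-amd64-ocaml312 schroot.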

26 August 2010

Sylvain Le Gall: CentOS 5 chroot with schroot

OCaml compiles native executables in static mode. This allows a minimal set of dependencies when delivering an executable. It also has disadvantages, like the size of the executable and problems arising when libraries are updated -- but that is another topic. There is still one strong dependency that you should not forget when you want to deliver a product for most Linux distributions: the dependency on the glibc version. Trying to run OASIS compiled on Debian Lenny, on CentOS 5.5:
$ OASIS
.../OASIS: /lib64/libc.so.6: version `GLIBC_2.7' not found (required by .../OASIS)
So when compiling for delivery, one should choose the oldest distribution one targets. In my case, I chose CentOS 5, which comes with glibc v2.5. I usually choose Debian stable -- at the time of writing, Debian Lenny. But for now, Debian Lenny's glibc is newer (v2.7) than the one coming with the CentOS 5.5 stable release. CentOS is a Red Hat-like Linux distribution. I use a Debian Lenny amd64 host system and I decided to set up chroots of CentOS 5 i386 and amd64. I also set up schroot to use my CentOS chroots. CentOS 5 amd64 setup: First of all we use rinse, which can set up an RPM-based distribution in a chroot. The version v1.3 shipped with Debian Lenny has some bugs: it doesn't install nss and other mandatory packages. So I downloaded v1.7 directly from Debian Sid. There are no dependency problems and the package is arch:all, so it is straightforward to install:
$ wget http://ftp.de.debian.org/debian/pool/main/r/rinse/rinse_1.7-1_all.deb # Replace ftp.de.debian.org by your preferred Debian mirror
$ dpkg -i rinse_1.7-1_all.deb
Then I create the chroot directory and launch rinse:
$ mkdir /srv/chroot/centos5-amd64
$ rinse --arch amd64 --distribution centos-5 --directory /srv/chroot/centos5-amd64 # N.B. you must use --arch, the default is i386
Once installation is complete, you can add an entry for this distribution in /etc/schroot/schroot.conf:
[centos5-amd64]
description=Centos 5 (amd64)
location=/srv/chroot/centos5-amd64
priority=3
users=XXX
groups=
root-groups=root
type=directory
run-setup-scripts=true
run-exec-scripts=true
Replace XXX with your login. If you try to log in directly, you will get warnings:
$ schroot -c centos5-i386
I : [chroot centos5-i386-a952de23-7f4b-4bae-a9b9-752ecee4a185] Exécution de l'interpréteur de commandes initial : « /bin/bash »
-bash: /dev/null: Permission denied
-bash: /dev/null: Permission denied
-bash: /dev/null: Permission denied
-bash: /dev/null: Permission denied
-bash: /dev/null: Permission denied
This is a bit misleading because the real problem is that nothing is created in /dev/. CentOS delegates creating char/block devices to udev. You have two solutions to solve this issue: either create the devices by hand with MAKEDEV, or copy them from an existing chroot:
$ MAKEDEV random
$ MAKEDEV console
$ MAKEDEV zero
$ MAKEDEV null
$ MAKEDEV stdout
$ MAKEDEV stdin
$ MAKEDEV stderr
$ rsync -av /srv/chroot/lenny-amd64/dev/* /srv/chroot/centos5-amd64/dev/
That's it, you now have a functional chrooted CentOS 5 environment:
$ schroot -c centos5-amd64 cat /etc/redhat-release
I : [chroot centos5-amd64-b9bae264-285b-4d17-a046-13386736cecd] Exécution de la commande : « cat /etc/redhat-release »
CentOS release 5.5 (Final)
CentOS 5 i386 setup: To set up an i386 environment, we follow almost the same scheme, except that we need to fix a bug in rinse v1.7: we need to call linux32 before executing chroot. The problem is that the first-stage installation of rinse installs an i386/i686 environment, but as soon as you call chroot yum install ..., yum will guess that the system is amd64 and install the missing packages accordingly. See the Debian bug report and the example patch attached to it to correct this behavior. WARNING: this patch is just an example; you can apply it for creating a CentOS i386 chroot on a Lenny amd64 host, but you should remove it as soon as the installation is complete.
$ mkdir /srv/chroot/centos5-i386/
$ rinse --arch i386 --distribution centos-5 --directory /srv/chroot/centos5-i386 # With /usr/lib/rinse/centos-5/post-install.sh patched 
$ rsync -av /srv/chroot/lenny-i386/dev/* /srv/chroot/centos5-i386/dev/
Add this distribution to /etc/schroot/schroot.conf:
[centos5-i386]
description=Centos 5 (i386)
location=/srv/chroot/centos5-i386
priority=3
users=XXX
groups=
root-groups=root
type=directory
run-setup-scripts=true
run-exec-scripts=true
personality=linux32
You now have a schroot of CentOS 5 i386:
$ schroot -c centos5-i386 cat /etc/redhat-release
I : [chroot centos5-i386-9acafa91-9862-4488-aaef-4ab2a482771e] Exécution de la commande : « cat /etc/redhat-release »
CentOS release 5.5 (Final)
Happy schroot hacking!

2 August 2010

Sylvain Le Gall: OCaml 3.12.0 is out: watch the movie

I have been quite busy the last few months. But anyway, I found time and solved various technical pitfalls to be able to bring you the first movie of the OCaml Meeting: Foreword by X. Leroy at OCaml Meeting 2010 (subtitle: OCaml 3.12.0 features presentation). In this video, Xavier Leroy tells us about the features in OCaml 3.12.0. This version is now released, so it is high time to release the matching movie. I will release other movies of the OCaml Meeting during August and will try to explain the various pitfalls I encountered -- and the OCaml solutions I used to solve them.

13 July 2010

Sylvain Le Gall: Project children has forked

The new project shares the same code base, but this branch is a total rewrite of the project children. Clementine was born on 8th July 2010, at 12:01 Paris time. She is now at home with her mother, father and brother. She is very peaceful and hardly cries more than once or twice a day: at bath time and around 5am for her night meal. (Photo: Clementine, by Bernard Le Gall.) I will probably lack time in the coming month. Expect some delays with the OCaml forge, the other OCaml projects I maintain and my OCaml Debian packages.

1 July 2010

Sylvain Le Gall: 5 years old CD-RW...

This blog post is a kind of follow-up to a previous blog post about reading 10-year-old CD-Rs. Today, I got a shiny new ASRock ION 330 to play with; in itself this PC has nothing particular. As usual, I begin by using System Rescue CD to check for bad blocks on the hard drive before working with it (badblocks -v -s -w -c 4096 /dev/sda). Once done, I will check the memory (memtest). This is my standard procedure when receiving a new PC. But I don't have my System Rescue CD at hand. No problem, I download it and write it to an old CD-RW, bought 5 years ago but still sealed in its blister pack. For your information, this is a Memorex CD-RW 700MB 16-24x. I blank it (wodim blank=all). I write it at speed 16x, because I don't know how to lower the speed. After writing the CD, I re-read it to check that everything is fine (readom dev=/dev/cdrw1 f=test.iso). Ooops:
$ readom dev=/dev/cdrw1 f=test.iso
[...] 
Errno: 5 (Input/output error), read_g1 scsi sendcmd: no error
CDB:  28 00 00 01 3C 00 00 00 40 00
status: 0x2 (CHECK CONDITION)
Sense Bytes: 70 00 03 00 00 00 00 0A 00 00 00 00 11 05 00 00
Sense Key: 0x3 Medium Error, Segment 0
Sense Code: 0x11 Qual 0x05 (l-ec uncorrectable error) Fru 0x0
Sense flags: Blk 0 (not valid) 
cmd finished after 6.614s timeout 40s
readom: Input/output error. Cannot read source disk
readom: Retrying from sector 80896.
.............
readom: Error on sector 80908 corrected after 9 tries. Total of 0 errors.
..............---
OK, I re-blank it (wodim blank=fast) and rewrite it. The CD seems to be readable this time. But when I try to boot my new ASRock with it, it fails. Enough is enough: I get a heavily used Verbatim 8cm CD-RW. Just to show you how old it is: the first ISO written on it was "Debian Sarge". And the magic happens: it works on the first try!!!! You will probably think that I work for Verbatim. This is not true, but I must admit that budget CD-RW/CD-R discs seem to be of very low quality and don't hold data over time. This is probably related to the chemical compounds of the disc. The failing Memorex CD-RW will end up in the trash and I will continue to use my 5-year-old Verbatim CD-RW. Dear reader, in order to avoid dumping this failing CD-RW, do you know a way to make it work? What you should know about the failing CD-RW:

9 June 2010

Sylvain Le Gall: Waiting for her in ~1month

My wife is pregnant and we are expecting our second baby's arrival in about a month. Last time, she came back from her preparation lessons with an advertising "baby shower" gift pack. One of them caught my attention. English translation: be prepared to offer him the best... English translation: because Debian guarantees the quality, and offering quality is a proof of love. OK, the swirl is the other way around and I made a 5-minute GIMP modification to cut-and-paste the Debian logo. But the message is there! Remember me: For those interested, the real thing is a soap by NUK(R).
